PII: S0893-6080(97)00023-3

Author

  • ISAO TOKUDA
Abstract

This paper studies the global bifurcation structure of chaotic neural networks applied to solve the traveling salesman problem (TSP). The bifurcation analysis clarifies the dynamical basis of the chaotic neuro-dynamics, which itinerates over a variety of network states associated with possible solutions of the TSP and efficiently 'searches' for the optimum or near-optimum solutions. By following the detailed merging process of chaotic attractors via crises, we find that crisis-induced intermittent switches among the ruins of the previous localized chaotic attractors underlie the 'chaotic search' for TSP solutions. On the basis of the present study, the efficiency of the 'chaotic search' for optimization problems is discussed, and a guideline is provided for tuning the bifurcation parameter value which gives rise to efficient 'chaotic search'. © 1997 Elsevier Science Ltd. All rights reserved.

Keywords: Chaotic neural network, Traveling salesman problem, Chaos, Bifurcation, Chaotic search, Chaotic itinerancy, Symmetry-increasing bifurcation.

1. INTRODUCTION

The traveling salesman problem (TSP) is a classic and famous example of a combinatorial optimization problem which is hard to deal with. The computational time required to find an exactly optimum solution grows faster than any finite power of some appropriate measure of the problem size as long as P ≠ NP (see e.g., Garey & Johnson, 1979; Lawler, Lenstra, Rinnooy Kan, & Shmoys, 1985; Reeves, 1993). In order to cope with such difficult problems, efficient approximate algorithms for finding a near-optimum solution within a reasonable computational time have been sought. As one such method, this paper focuses on an intriguing optimization technique for the TSP by chaotic dynamics (Nozawa, 1992, 1994; Yamada, Aihara, & Kotani, 1993; Yamada & Aihara, 1994; Chen & Aihara, 1995) based on chaotic neural networks (Aihara, Takabe, & Toyoda, 1990). In the neural network approach to the TSP, every possible solution of the TSP is mapped into a network of neurons with (0,1)-binary outputs (Hopfield & Tank, 1985). Optimization by chaotic dynamics searches for a better TSP solution by following a chaotic wandering orbit. By visiting a variety of network states which correspond to possible solutions of the TSP, the chaotic dynamics continually keeps searching for a better solution. Chaotic dynamics in neural network models has been discussed in earlier studies in the light of its potential biological functional role (Skarda & Freeman, 1987; Tsuda, Koerner, & Shimizu, 1987; Yao & Freeman, 1987; Aihara et al., 1990; Aihara, 1990; Tsuda, 1991b, 1992; Adachi & Aihara, 1996; Nagashima, Shiroki, & Tokuda, 1996). Freeman et al. suggested a functional role of chaos as a novelty filter when a rabbit memorizes a new odor (Skarda & Freeman, 1987; Yao & Freeman, 1987). Several researchers (Tsuda et al., 1987; Tsuda, 1992; Aihara, 1990; Adachi & Aihara, 1996; Nagashima et al., 1996) observed chaotic neuro-dynamics which successively retrieves memory states stored with a Hebbian or auto-associative matrix of synaptic connections. Such a dynamical process can be understood as the 'memory search' mechanism by ...

Acknowledgements: The authors wish to thank Professor T. Matsumoto (Waseda University), Professor I. Tsuda (Hokkaido University), Professor H. Kokubu (Kyoto University), and Professor R. Tokunaga (Tsukuba University) for stimulating discussions and continual encouragement on this work. They would also like to thank Professor M. Kubo (Tokyo University of Mercantile Marine) for his generous guidance to modern TSP studies and Professor K. Ikeda (Ritsumeikan University) for valuable comments on the original manuscript. They are also grateful to the reviewers of Neural Networks for careful reading of the manuscript, precious comments on the present study from the viewpoint of chaotic itinerancy, and constructive suggestions on our experiments. Requests for reprints should be sent to: Isao Tokuda, Department of Computer Science and Systems Engineering, Muroran Institute of Technology, Muroran, Hokkaido 050, Japan. Tel: +81-143-47-3281; Fax: +81-143-47-3374; E-mail: [email protected].
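The chaotic neural networks of Aihara, Takabe, & Toyoda (1990) cited above are built from single chaotic neurons whose internal state evolves under a map with exponentially decaying refractoriness. The following is a minimal sketch of that single-neuron map; the parameter values are illustrative choices, not the ones used in the paper's TSP experiments:

```python
import math

def chaotic_neuron(y0, k=0.7, alpha=1.0, a=0.5, eps=0.02, steps=100):
    """Iterate a single chaotic neuron in the style of Aihara et al. (1990):
        y(t+1) = k*y(t) - alpha*f(y(t)) + a,  f(y) = 1 / (1 + exp(-y/eps)),
    where k is the decay of refractoriness, alpha its strength, a a bias,
    and f a steep sigmoid output function. Parameter values here are
    illustrative; the network version couples many such units through
    a synaptic weight matrix in the same internal-state update."""
    f = lambda y: 1.0 / (1.0 + math.exp(-y / eps))
    ys = [y0]
    for _ in range(steps):
        y = ys[-1]
        ys.append(k * y - alpha * f(y) + a)
    return ys

trajectory = chaotic_neuron(0.1)
```

Depending on (k, alpha, a), the iterates settle to a fixed point, a cycle, or a chaotic orbit; in the TSP application it is the chaotic regime of the coupled network that drives the itinerant 'search' over candidate tours.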


Similar Resources

Additive neural networks and periodic patterns

In this contribution we discuss weight selection which allows additive neural networks to represent certain periodic patterns. Given a periodic set of vectors V(l) whose components are v(i)(l) = ±1, we measure the correlation between the i-th and j-th components of V(l) over time l. We show that in the additive neural net with weights chosen based on this correlation, almost all trajectories converge to a...
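The correlation-based weight selection described in this snippet is a Hebbian-style construction. A generic sketch of it, under the assumption that the correlation is the time average of component products (the paper's exact normalization may differ):

```python
import numpy as np

def correlation_weights(V):
    """Hebbian-style weight matrix from a periodic set of +/-1 vectors.
    V has shape (L, n): L time steps of an n-component pattern.
    W[i, j] = (1/L) * sum_l v_i(l) * v_j(l), the time correlation
    between components i and j, with self-coupling zeroed out.
    This is a generic construction in the spirit of the snippet,
    not necessarily the paper's exact rule."""
    L, n = V.shape
    W = V.T @ V / L           # pairwise time correlations
    np.fill_diagonal(W, 0.0)  # no self-coupling
    return W

# A period-3 pattern of 3-component +/-1 vectors (repeated first row).
V = np.array([[ 1, -1,  1],
              [ 1,  1, -1],
              [-1,  1,  1],
              [ 1, -1,  1]])
W = correlation_weights(V)
```

The resulting matrix is symmetric with entries in [-1, 1], which is what makes convergence arguments for the additive network tractable.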


PII: S0893-6080(97)00012-9

Kohonen's learning vector quantization (LVQ) is modified by attributing training counters to each neuron, which record its training statistics. During training, this allows for dynamic self-allocation of the neurons to classes. In the classification stage, training counters provide an estimate of the reliability of classification of the single neurons, which can be exploited to obtain a substantially higher purity of classifica...


Regularization with a Pruning Prior

We investigate the use of a regularization prior and its pruning properties. We illustrate the behavior of this prior by conducting analyses both using a Bayesian framework and with the generalization method, on a simple toy problem. Results are thoroughly compared with those obtained with a traditional weight decay. Copyright 1997 Elsevier Science Ltd.


Precision Requirements for Closed-Loop Kinematic Robotic Control Using Linear Local Mappings

Neural networks are approximation techniques that can be characterized by adaptability rather than by precision. For feedback systems, high precision can still be acquired in the presence of errors. Within a general iterative framework of closed-loop kinematic robotic control using linear local modeling, the inverse Jacobian matrix error and the maximum length of the displacement for which the line...


Estimates of the Number of Hidden Units and Variation with Respect to Half-Spaces

We estimate variation with respect to half-spaces in terms of "flows through hyperplanes". Our estimate is derived from an integral representation for smooth compactly supported multivariable functions proved using properties of the Heaviside and delta distributions. Consequently we obtain conditions which guarantee approximation error rate of order O by one-hidden-layer networks with n sigmoid...



Journal title:

Volume   Issue

Pages  -

Publication date: 2003